Making [vulnerable group] safe online: unpicking those who spin a line to sell their proposal

We’ve had yet another round of “online anonymity should be banned” over the weekend.

It’s one of those tropes which has not died the death it deserves, and it appears with such predictable regularity - often promulgated by privileged people who haven’t given a moment’s thought to the harms it would cause, and accompanied with a healthy serving of a rather self-centred “I can’t believe no-one’s thought of this before” - that those of us who’ve been round this course more than once get rather a sense of “Groundhog Day”.

sigh

Another regular screed is the anti-encryption polemic.

Encryption, of course, is one of the tools which contributes to societal well-being and safety, online and offline.

At its most basic, it should mean that we can transmit and receive information - our cat pics, our shopping requirements, our banking passwords, our sexual photos, our private messages, our calls with our mum or our lawyer or our doctor or our partner - without sending that information in plaintext.

It also helps from an integrity point of view, mitigating the threat of unwanted or unintentional manipulation of the content of packets in transit.
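
By way of a rough sketch (using Python and the third-party “cryptography” package purely for illustration - nothing above implies either), an authenticated encryption scheme gives you both of those properties at once: what travels over the wire is not plaintext, and tampering with it in transit is rejected rather than silently accepted.

```python
# Rough sketch only: authenticated symmetric encryption with the third-party
# "cryptography" package (pip install cryptography). Illustrative, not a recipe.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # shared secret; never sent alongside the message
box = Fernet(key)

token = box.encrypt(b"a private message to my lawyer")
print(token)                  # what goes over the wire: ciphertext, not plaintext

print(box.decrypt(token))     # the intended recipient, holding the key, can read it

# Integrity: flip one bit of the transmitted token and decryption is refused.
tampered = bytearray(token)
tampered[10] ^= 0x01
try:
    box.decrypt(bytes(tampered))
except InvalidToken:
    print("manipulation in transit detected and rejected")
```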

It may also assist with determining the source of a communication, through the use of encryption for digital signing and verification.
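
For instance - and again this is a simplified sketch, assuming Python and the “cryptography” package, with Ed25519 chosen purely as an example algorithm - a digital signature lets a recipient check that a message really came from the holder of a particular private key, and that it was not altered along the way.

```python
# Rough sketch only: digital signing and verification with the "cryptography"
# package, using Ed25519 purely as an example algorithm.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()      # shared with anyone who needs to check

message = b"this message comes from a known sender"
signature = signing_key.sign(message)

try:
    verify_key.verify(signature, message)  # raises if signature or message is wrong
    print("signature valid: sent by the key holder, and unmodified")
except InvalidSignature:
    print("signature invalid: wrong sender or altered message")
```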

Security in practice: controlled compromises, not absolutes

While, in an ideal world, encryption may deliver “absolute” security, in reality there is no such thing as “absolute” security.

Protecting everyone from every conceivable threat, now and in the future, is simply not a goal I’ve ever heard a cryptographer or security professional expound outside a theoretical conversation.

Why?

Because it’s just not practical. It’s not realistic.

And most security professionals (that I know, at least) are practical and realistic.

Security is always a compromise. A compromise between many factors. Usability. Computational resource. Ease of implementation. Cost. The harms caused by the presence of the security feature. And so on.

Typically, one determines the appropriate standard of security through threat modelling, which is a form of risk assessment and prioritisation.

You look at the various factors affecting your decision, including the adverse situations or people you are trying to protect against and the risks you face, and you come up with the solution which is the best fit for all of those needs. In other words, which compromises can you accept, and which can you not?

(I’ve written a little about threat modelling in the “Cybersecurity for Lawyers” wiki, which is my small attempt to try to increase the level of cybersecurity awareness in the legal profession.)
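
To make that concrete with a deliberately toy example (the threats and scores below are invented for illustration; they are not a methodology, and they are not taken from the wiki): you list the threats you care about, estimate likelihood and impact, and spend your effort on the highest-scoring risks while consciously accepting the rest.

```python
# Toy illustration of risk prioritisation: invented threats and made-up scores,
# not a real threat model or a prescribed methodology.
threats = [
    {"threat": "passive interception on public Wi-Fi", "likelihood": 4, "impact": 4},
    {"threat": "stolen or lost laptop",                "likelihood": 2, "impact": 5},
    {"threat": "targeted nation-state attack",         "likelihood": 1, "impact": 5},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]   # crude score: likelihood x impact

# Spend effort on the highest-scoring risks; consciously accept the remainder.
for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["threat"]}')
```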

If anyone tells you that their security solution - be it encryption, a VPN, an operating system, or whatever - is absolutely secure, run like the wind in the opposite direction.

John Carr has written an interesting post about the need to evaluate with care the claims made by proponents of encryption.

He concludes that bad actors who oversell the benefits of encryption and downplay its limitations

“are not telling the truth, the whole truth and nothing but the truth. They are spinning a line.”

Yes. Well said.

Applying John’s blogpost to “online safety” claims

The underlying tenet of John’s blogpost is also true for claims about online safety.

One must approach claims that:

“doing [x] will protect [vulnerable group]”

or, even worse:

“doing [x] will make the Internet safe for [vulnerable group]”

with exactly the same scepticism as one approaches claims of “absolute” security.

The measures being proposed might increase protection, or make something incrementally safer.

But they will not make anyone, or anything, “absolutely” safe or protected.

To apply John’s logic, each time someone makes an online safety / protection claim without detailing the proposal’s weaknesses and limitations:

“they are not telling the truth, the whole truth and nothing but the truth. They are spinning a line.”

This is important. Exploiting political capital, and trading on people’s fears, by overselling a solution’s potential (often for commercial advantage), is harmful and unwelcome.

So next time we see someone parroting the line that “we” need to remove anonymity on social media or that end-to-end encryption must be banned or regulated or limited to keep people safe online, I trust that there will be the same degree of fervour to eradicate untruths, lies, and misrepresentations in those claims.

I agree with John that demanding greater scrutiny of claims of safety and security, and insisting on the utmost transparency, is not just valuable, but vital.

And that burden cannot sit with the relatively small group of people who currently examine and critique such proposals (often in the face of considerable criticism for doing so - the “screeching voices of the minority”, as one organisation put it (sorry; TechCrunch link)).

It needs to be shouldered by those putting “online safety” proposals forward, making sure that they factor this sort of thinking into their design phase, and by those advocating for these proposals, making sure that they tell the truth, the whole truth, and nothing but the truth.